Can AI be Auditable?
Verma, Himanshu, Padh, Kirtan, Thelisson, Eva
Auditability is defined as the capacity of AI systems to be independently assessed for compliance with ethical, legal, and technical standards throughout their lifecycle. The chapter explores how auditability is being formalized through emerging regulatory frameworks, such as the EU AI Act, which mandate documentation, risk assessments, and governance structures. It analyzes the diverse challenges facing AI auditability, including technical opacity, inconsistent documentation practices, lack of standardized audit tools and metrics, and conflicting principles within existing responsible AI frameworks. The discussion highlights the need for clear guidelines, harmonized international regulations, and robust socio-technical methodologies to operationalize auditability at scale. The chapter concludes by emphasizing the importance of multi-stakeholder collaboration and auditor empowerment in building an effective AI audit ecosystem. It argues that auditability must be embedded in AI development practices and governance infrastructures to ensure that AI systems are not only functional but also ethically and legally aligned.
From Transparency to Accountability and Back: A Discussion of Access and Evidence in AI Auditing
Artificial intelligence (AI) is increasingly intervening in our lives, raising widespread concern about its unintended and undeclared side effects. These developments have brought attention to the problem of AI auditing: the systematic evaluation and analysis of an AI system, its development, and its behavior relative to a set of predetermined criteria. Auditing can take many forms, including pre-deployment risk assessments, ongoing monitoring, and compliance testing. It plays a critical role in providing assurances to various AI stakeholders, from developers to end users. Audits may, for instance, be used to verify that an algorithm complies with the law, is consistent with industry standards, and meets the developer's claimed specifications. However, there are many operational challenges to AI auditing that complicate its implementation. In this work, we examine a key operational issue in AI auditing: what type of access to an AI system is needed to perform a meaningful audit? Addressing this question has direct policy relevance, as it can inform AI audit guidelines and requirements. We begin by discussing the factors that auditors balance when determining the appropriate type of access, and unpack the benefits and drawbacks of four types of access. We conclude that, at minimum, black-box access -- providing query access to a model without exposing its internal implementation -- should be granted to auditors, as it balances concerns related to trade secrets, data privacy, audit standardization, and audit efficiency. We then suggest a framework for determining how much further access (in addition to black-box access) to grant auditors. We show that auditing can be cast as a natural hypothesis test, draw parallels between hypothesis testing and legal procedure, and argue that this framing provides clear and interpretable guidance on audit implementation.
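The hypothesis-test framing of black-box auditing can be illustrated with a toy sketch: the auditor only queries the model, then tests whether error rates differ between two groups. All names, data, and the specific test below are illustrative assumptions, not the paper's actual method.

```python
import math

# Hypothetical black-box classifier: the auditor may only query it,
# never inspect its internals. This toy stand-in flags inputs whose
# score falls below a fixed cutoff (purely illustrative).
def model(features):
    return 1 if features["score"] < 0.4 else 0

def audit_error_disparity(model, group_a, group_b):
    """Cast a black-box audit as a two-proportion hypothesis test.

    H0: both groups experience the same error rate.
    Each group is a list of (features, true_label) pairs; the auditor
    observes only the model's answers to its queries.
    """
    err_a = sum(model(x) != y for x, y in group_a) / len(group_a)
    err_b = sum(model(x) != y for x, y in group_b) / len(group_b)
    n_a, n_b = len(group_a), len(group_b)
    pooled = (err_a * n_a + err_b * n_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (err_a - err_b) / se if se > 0 else 0.0
    return err_a, err_b, z

# Small fixed audit sample (illustrative data, not real).
group_a = [({"score": 0.1}, 1), ({"score": 0.3}, 1),
           ({"score": 0.5}, 0), ({"score": 0.7}, 0)]
group_b = [({"score": 0.2}, 0), ({"score": 0.35}, 1),
           ({"score": 0.45}, 1), ({"score": 0.8}, 0)]

err_a, err_b, z = audit_error_disparity(model, group_a, group_b)
# A |z| beyond the chosen critical value would reject H0 and flag a disparity.
```

The legal parallel the abstract draws maps naturally onto this sketch: the null hypothesis plays the role of a presumption, and the auditor's queries accumulate evidence against it.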
Advancing AI Audits for Enhanced AI Governance
Ema, Arisa, Sato, Ryo, Hase, Tomoharu, Nakano, Masafumi, Kamimura, Shinji, Kitamura, Hiromu
As artificial intelligence (AI) is integrated into various services and systems in society, many companies and organizations have proposed AI principles and policies and made related commitments. Conversely, some have proposed the need for independent audits, arguing that the voluntary principles adopted by the developers and providers of AI services and systems insufficiently address risk. This policy recommendation summarizes the issues related to the auditing of AI services and systems and presents three recommendations for promoting AI auditing that contribute to sound AI governance. Recommendation 1: Development of institutional design for AI audits. Recommendation 2: Training human resources for AI audits. Recommendation 3: Updating AI audits in accordance with technological progress. In this policy recommendation, AI is assumed to be technology that recognizes and predicts from data; the last chapter outlines how generative AI should be audited.
AI Audit: A Card Game to Reflect on Everyday AI Systems
Ali, Safinah, Kumar, Vishesh, Breazeal, Cynthia
An essential element of K-12 AI literacy is educating learners about the ethical and societal implications of AI systems. Previous work in AI ethics literacy has developed curricula and classroom activities that engage learners in reflecting on the ethical implications of AI systems and developing responsible AI. There has been little work on using game-based learning methods in AI literacy. Games are known to be compelling media for teaching children complex STEM concepts. In this work, we developed a competitive card game for middle and high school students called "AI Audit" in which they play as AI start-up founders building novel AI-powered technology. Players can challenge other players with potential harms of their technology or defend their own businesses with features that mitigate these harms. The game mechanics reward systems that are ethically developed or that take steps to mitigate potential harms. In this paper, we present the game design, teacher resources for classroom deployment, and early playtesting results. We discuss our reflections on using games as teaching tools for AI literacy in K-12 classrooms.
Biden administration asks public for help regulating AI systems like ChatGPT
Artificial intelligence poses both risks and rewards, and developers should be wary of technologies that could threaten "scary" outcomes, an AI technologist says. Federal regulators are asking the public for input on policies that would hold artificial intelligence (AI) systems accountable and help manage risks from the rapidly growing and powerful technology. As programs like ChatGPT gain popularity for their astounding ability to answer written questions with human-like responses, policymakers and tech experts are increasingly concerned about their potential for misuse, including how artificially generated news reports can rapidly spread fabricated and false information. Now that ChatGPT has more than 100 million monthly active users, the government is beginning to study how these programs should be regulated. The National Telecommunications and Information Administration, a Commerce Department agency that advises the White House on telecommunications and information policy, solicited public feedback Tuesday as it works to develop policies to "ensure artificial intelligence (AI) systems work as claimed – and without causing harm."
Do AI systems need to come with safety warnings?
Considering how powerful AI systems are, and the roles they increasingly play in helping to make high-stakes decisions about our lives, homes, and societies, they receive surprisingly little formal scrutiny. That's starting to change, thanks to the blossoming field of AI audits. When they work well, these audits allow us to reliably check how well a system is working and figure out how to mitigate any possible bias or harm. Famously, a 2018 audit of commercial facial recognition systems by AI researchers Joy Buolamwini and Timnit Gebru found that the system didn't recognize darker-skinned people as well as white people. For dark-skinned women, the error rate was up to 34%. As AI researcher Abeba Birhane points out in a new essay in Nature, the audit "instigated a body of critical work that has exposed the bias, discrimination, and oppressive nature of facial-analysis algorithms."
What We Learned Auditing Sophisticated AI for Bias
A recently passed law in New York City requires audits for bias in AI-based hiring systems. AI systems fail frequently, and bias is often to blame. A recent sampling of headlines features sociological bias in generated images, a chatbot, and a virtual rapper. These examples of denigration and stereotyping are troubling and harmful, but what happens when the same types of systems are used in more sensitive applications? Leading scientific publications assert that algorithms used in healthcare in the U.S. diverted care away from millions of black people.
AI Audits Are Coming to HR
Companies are increasingly adopting automated systems to support, and sometimes even replace, humans in key steps of their hiring and employee management processes. According to Forbes, all Fortune 500 companies report using some form of automation in their HR pipelines. And it's not just large companies that are jumping on this trend: industry surveys have shown that at least 25% of all US-based companies plan to increase their use of automated systems in hiring and talent management over the next few years. From an economic perspective, it's not difficult to understand why companies are embracing automation. Automated systems offer a highly-scalable way to add efficiency to HR, and remove many critical bottlenecks when it comes to identifying and hiring new talent.
Problems with audits for bias in AI systems highlighted in research paper
Sasha Costanza-Chock, co-author of a research paper on algorithmic audits, says many areas require improvement to bolster the effectiveness of the process and reduce harms from bias in AI used in the real world, such as facial recognition systems. Speaking about the Algorithmic Justice League paper on a recent episode of the technology news podcast Marketplace, Costanza-Chock posits that it is currently very difficult to determine the effectiveness of algorithmic audits because of non-disclosure agreements that bind first- and second-party auditors, who have more access to the data and systems of the companies they are auditing. Bias has been found in algorithms not only related to biometric matching, but in adjacent areas like liveness detection, as well as in unrelated AI applications. While putting together the research paper, which identifies emerging best practices as well as methods and tools for AI audits, the team found that many variations exist in the algorithmic auditing process, as there is no harmonized standard or regulation on what auditors should look for, said the co-author. While some audits focus on the accuracy or fairness of training and sample data, others look at the privacy and security implications of the systems under audit, and only about half of the auditors they spoke to said they check whether companies have quality systems that enable users to report AI bias harms in real time.
To make AI fair, here's what we must learn to do
Beginning in 2013, the Dutch government used an algorithm to wreak havoc in the lives of 25,000 parents. The software was meant to predict which people were most likely to commit childcare-benefit fraud, but the government did not wait for proof before penalizing families and demanding that they pay back years of allowances. Families were flagged on the basis of 'risk factors' such as having a low income or dual nationality. As a result, tens of thousands were needlessly impoverished, and more than 1,000 children were placed in foster care. From New York City to California and the European Union, many artificial intelligence (AI) regulations are in the works.